We started looking at a class of algorithms called search algorithms yesterday. These are perhaps the most basic AI algorithms, and they're predicated on one thing: you can only apply them if you can formulate your problem as a search problem, that is, a set of states, a set of actions that get you from state to state, and a criterion for whether you've reached a goal state. If you have that, and possibly a cost function, then you're in business. You can take one of the algorithms we looked at yesterday off the shelf, apply it, and everything works. I would like you to look at this definition. It says: given a search problem P equals blah-de-blah-de-blah, four components, and that
tells us very well the class of algorithms we're going to look at.
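Those four components can be written down as a small data structure. Here is a minimal sketch in Python; it is illustrative only, not the course's actual notation or code, and the map fragment is made up for the example:

```python
# A search problem has four components: an initial state, a successor
# (action) function, a goal test, and a step-cost function.
# All names here are illustrative assumptions, not the course's code.

class SearchProblem:
    def __init__(self, initial, successors, is_goal, cost):
        self.initial = initial        # the state we start in
        self.successors = successors  # state -> iterable of successor states
        self.is_goal = is_goal        # state -> bool: the goal criterion
        self.cost = cost              # (state, successor) -> step cost

# Tiny fragment of the Romania road map as an example instance.
roads = {"Arad": ["Zerind", "Sibiu", "Timisoara"]}
romania = SearchProblem(
    initial="Arad",
    successors=lambda s: roads.get(s, []),
    is_goal=lambda s: s == "Bucharest",
    cost=lambda s, t: 1,  # uniform cost; a real map would use road lengths
)
print(romania.successors("Arad"))  # -> ['Zerind', 'Sibiu', 'Timisoara']
```

Once a problem is packaged like this, any of the algorithms below can be run on it unchanged; only the problem instance varies.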
They're applicable to anything you can formulate as a search problem. So if you've done the work of formulating your problem the right way, we can just pick an algorithm. And we're going to study algorithms from a quite general class: algorithms that generate what we call a search tree and systematically go through it as prescribed by a strategy. We're going to vary only the strategies; everything else stays the same. We have a loop, meaning we do something over and over: we look at a node in the tree and expand it if it can be expanded; if not, we have failed. Then we check whether the node we're looking at is a goal node; if so, we succeed. Otherwise, we recursively look at all of the newly expanded nodes. Okay? This
here is the search tree for Romania. We're starting out with the initial state, which is being in Arad. The green part is only a preview of what's to come, so that you can see we're dealing with a tree; the algorithm doesn't see any of it. So there's a difference between you looking at the slides to understand the algorithm and the algorithm itself: at this point, it only sees Arad. Okay, so what do we do? Well, is
this a goal node?
No. It's not Bucharest. So what do we do? We expand it, which means we compute all its successors: one, two, three. And then we choose one of the nodes that have not been expanded yet, and we're essentially in the same situation as before. We have a node, we check whether it's a goal node, in this case Sibiu, which it is not, so what do we do? We compute all the successors and choose a new node; in this case we've chosen Zerind. We expand it, no goal nodes in sight, and that is what we do over and over again. There is one data structure I want you to notice: the list, or set, of all unexpanded nodes. We're going to call that the fringe, and we always take one node from it. The only place these algorithms differ is in how they choose the next node from the fringe. That choice is what we call a strategy, and the strategy will determine how we go down the search tree. One thing we talked about, and it's important to realize
here is that we are not actually in the state space itself; we are in a state tree which we're exploring. Essentially, rather than dealing with complicated graph algorithms, we unfold the graph into a tree from the perspective of the initial node. This brings us a couple of problems, namely repeated states, because the same state may appear multiple times in the tree, but it gives us a very simple algorithm that we can readily implement. I think you have already implemented it in Prolog, which for some
of the strategies is easier than for others, as you will have discovered.
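The loop described above, with the fringe as the only point where strategies differ, can be sketched as follows. This is a minimal illustration in Python, not the course's Prolog code, and the map fragment (one-directional roads only) is made up for the example:

```python
from collections import deque

# Tiny, acyclic fragment of the Romania road map: state -> successors.
# Illustrative only; the real map has two-way roads and distances.
SUCCESSORS = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Oradea"],
    "Sibiu": ["Fagaras", "Rimnicu Vilcea"],
    "Timisoara": ["Lugoj"],
    "Fagaras": ["Bucharest"],
    "Rimnicu Vilcea": ["Pitesti"],
    "Pitesti": ["Bucharest"],
}

def tree_search(initial, is_goal, choose):
    """Generic tree search. `choose` is the strategy: it picks and
    removes the next node from the fringe (the unexpanded nodes)."""
    fringe = deque([initial])
    while fringe:
        node = choose(fringe)      # the ONLY place strategies differ
        if is_goal(node):
            return node            # success: goal node found
        fringe.extend(SUCCESSORS.get(node, []))  # expand: add successors
    return None                    # fringe empty: search failed

# Two strategies: FIFO gives breadth-first, LIFO gives depth-first.
bfs = lambda fringe: fringe.popleft()
dfs = lambda fringe: fringe.pop()

goal = lambda s: s == "Bucharest"
print(tree_search("Arad", goal, bfs))  # -> Bucharest
```

Note that the successor relation here is deliberately acyclic; with two-way roads, pure tree search can revisit the same state forever, which is exactly the repeated-states problem of unfolding a graph into a tree.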
Okay we will always in this course try and keep tabs on how bad our algorithms
Presenters
Accessible via
Open access
Duration
00:12:54 min
Recording date
2020-10-27
Uploaded on
2020-10-27 15:47:16
Language
en-US
Recap: Search
Main video on the topic in chapter 7 clip 3.